Patent abstract:
The invention describes a method for measuring the audio description time in the Transport Streams (TS) of Digital Terrestrial Television (DTT), which comprises: obtaining the different TS from the DTT; extracting information about each channel from the Event Information Tables (EIT) and from the Service Information (SI) tables included in said TS; extracting additional information on each channel from an Electronic Program Guide (EPG) obtained from a Web Service; generating an Enriched Electronic Program Guide (EEPG) by combining the information extracted from the EIT and the additional information extracted from the EPG; separating the audio streams of each program of each channel transported by said TS according to the programming of each channel obtained from the EEPG; and analyzing the audio streams corresponding to each program of each channel to obtain the audio description time. (Machine translation, not legally binding.)
Publication number: ES2668498A1
Application number: ES201631471
Filing date: 2016-11-17
Publication date: 2018-05-18
Inventors: Juan Manuel Carrero Leal; Francisco José GONZÁLEZ LEÓN; María Belén RUIZ MEZCUA; José Manuel Sánchez Pena
Applicants: REAL PATRONATO SOBRE DISCAPACIDAD; Universidad Carlos III de Madrid
Main IPC class:
Patent description:

Procedure and system to measure the audio description time in radio and television transport flows
OBJECT OF THE INVENTION
The present invention belongs to the field of digital television and radio.
The object of the present invention is a novel automatic procedure capable of measuring the audio description time in radio and digital television transport streams.
BACKGROUND OF THE INVENTION
Digital radio and television are understood as the set of technologies for the emission and reception of television and radio signals in which the image, sound and other associated data are digitally encoded, as opposed to traditional analogue television and radio technologies. There are different coding standards currently accepted worldwide. In Europe, the family of coding standards in use is DVB (Digital Video Broadcasting), used for the broadcast and reception of television and radio in digital format. The DVB family of standards includes different transmission possibilities, including terrestrial DVB-T (DVB-Terrestrial), satellite DVB-S (DVB-Satellite), cable DVB-C (DVB-Cable) and Internet DVB-IP (DVB-Internet Protocol).
One of the main advantages of digital television and radio systems over analog systems is the ability to incorporate data streams into the video and audio streams, encapsulating all of them in a transport stream or TS, according to a certain standard (for example, in DVB systems, following the MPEG-2 part 1 standard, ISO/IEC 13818-1). This makes it possible to provide additional functionalities and services, such as accessibility services. Accessibility services include subtitling, especially useful for users with hearing impairment, and audio description, especially useful for users with visual impairment.
The General Law of Audiovisual Communication (LGCA) establishes minimum thresholds for the broadcasting of audio-described programs for all television channels, which implies the need to verify the number of audio-described hours for each of the channels. Currently, the audio description verification process is carried out by human intervention, which implies a complete viewing of the 24 hours of daily programming of each channel, noting in each case whether a program is audio-described or not. This is a great use of human resources, since it is necessary to verify all national-coverage channels on a daily basis. In addition to the workload, the complete recording of the signals of the channels distributed in the different national-coverage multiplexes represents a very high storage requirement. A single day of recording involves approximately 1.7 TB of file volume, which makes efficient resource management very complicated.
DESCRIPTION OF THE INVENTION
The present invention solves the above problem by allowing quantitative information on the total hours of audio-described content on each channel to be obtained automatically, without human intervention. In addition, the storage volume necessary with the process of the invention is much lower than currently required, since for each channel it is only necessary to record the audio streams, the Event Information Table (EIT) and the corresponding Program Specific Information (PSI) tables. As a consequence, the volume of stored data is reduced by approximately 95% (only about 80 GB of daily data is stored). This allows for contingencies that force data storage for a longer time.
Some terms that will be used throughout the present specification are described below.
Transport Flow (Transport Stream, TS)
This is a communication protocol for audio, video and data specified in the MPEG-2 standards. In television, the different video, audio and data streams of the different stations that are broadcast on the same DTT channel are multiplexed within one Transport Stream. For example, the same Transport Stream could carry 3 stations: 2 television stations with their respective video streams, dubbed audio, original audio, teletext data, subtitle data, etc., and a radio station along with its corresponding audio and data. Together with said audio, video and other streams, the Transport Stream contains signaling tables with information about the channels it carries.
Multiplex
The capacity of a DTT channel can be subdivided into multiple sub-channels to transmit different video information in different resolutions for fixed and mobile devices, audio and other data. When a frequency band contains multiple sub-channels, it is called a Multiplex.
Web service (WS, Web Service)
It is a technology used to exchange data between various applications. Web services act as intermediaries to exchange information between applications that may be developed in different languages and different platforms, using a well-defined data exchange protocol.
Event Information Table (EIT)
This is a table that contains information about the events of all the channels present within a Transport Flow. In other words, it transmits information such as start time, duration, synopsis, etc. of each of the programs transmitted through the Transport Flow.
Program Specific Information (PSI)
Service information is provided through the PSI (Program Specific Information) tables. These tables provide the necessary information so that the decoders can demultiplex the different programs contained in a Transport Flow. In the Transport Flows, the Specific Program Information is structured in five tables: PAT, PMT, NIT, CAT and TSDT.
Program Map Table (PMT)
It is another of the signaling tables sent within the data of a Transport Stream. There is one Program Map Table for each channel contained within the Transport Stream, and it provides information such as the program number of that channel or the list of elementary streams (video, audio, data) that it contains.
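As an illustration of how the PAT and PMT drive demultiplexing, the lookup can be sketched as follows. This is a toy sketch: real Transport Stream parsing operates on 188-byte packets, and the dictionary structures here merely stand in for already-parsed PAT and PMT content.

```python
def streams_for_program(pat, pmts, program_number):
    """Given a parsed PAT (program_number -> PMT PID) and parsed PMTs
    (PMT PID -> list of (stream_type, elementary PID) pairs), return
    the elementary-stream PIDs of one program. Toy structures only;
    real PSI data would be parsed from TS packets."""
    pmt_pid = pat[program_number]
    return [pid for _stype, pid in pmts[pmt_pid]]
```

For example, a PAT entry mapping program 101 to PMT PID 0x100, whose PMT lists a video stream on PID 0x101 and an audio stream on PID 0x102, yields those two elementary PIDs for the demultiplexer.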
Electronic Program Guide (EPG)
It is the information shown to the user specifying the different contents that the television channels are going to broadcast: programs, their start and end times, genre, recommended age, synopsis, etc.
A first aspect of the present invention describes a method for measuring the audio description time in the Transport Streams (TS) of Digital Terrestrial Television (DTT), which basically includes the following steps:
1) Obtain the different Transport Flows (TS) of Digital Terrestrial Television (DTT)
This step involves the reception, tuning and recording of the different television Multiplexes present in the Transport Flows (TS) for further treatment and analysis. As previously mentioned, each Transport Flow (TS) includes several channels. The structure that defines the streams associated with each of these channels is transmitted in the Program Specific Information (PSI) tables. During this step, the data stored in these tables is used to correctly demultiplex the different channels present in the Transport Flow (TS).
2) Extract information about each channel from the Event Information Tables (EIT) included in these Transport Flows (TS).
As mentioned, each Transport Flow (TS) includes an Event Information Table (EIT) that contains information about the events of all the channels included in that Transport Flow (TS). This is information related to the name of the program, the start and end time, genre, and other data, for each of the programs included in the channels of that Transport Flow (TS). In this step, the Event Information Table (EIT) is obtained, and the available information on the programs contained in the channels of that Transport Flow (TS) is extracted from it.
3) Extract additional information about each channel from an Electronic Program Guide (EPG) obtained from a Web Service.
Frequently, the Event Information Table (EIT) does not include complete information on all programs of all channels transmitted by a Transport Stream (TS). For this reason, a public Web Service that offers an Electronic Program Guide (EPG) containing the programming of the channels is used, and additional information that may not have been obtained from the Event Information Table (EIT) is extracted from it.
4) Generate an Enriched Electronic Program Guide (EEPG) by combining the information extracted from Event Information Tables (EIT) and additional information extracted from the Electronic Program Guide (EPG).
The combination of the information from both sources can be carried out in different ways, although in a particular embodiment of the invention the information obtained from the Event Information Table (EIT) is taken as the basis and, if there are gaps, these are filled in with information obtained from the Electronic Program Guide (EPG).
More specifically, during the creation of the Enriched Electronic Program Guide (EEPG) the following cases are detected and resolved:
- Time sections without data in the programming obtained from the Event Information Table (EIT). To solve this, the information provided by the Electronic Program Guide (EPG) obtained from the Web Service is used so that the Enriched Electronic Program Guide (EEPG) includes information 24 hours a day.
- Absence of genre-related data in the programming obtained from the Event Information Table (EIT). To solve it, the missing information is searched for in the Electronic Program Guide (EPG) obtained from the Web Service and entered in the Enriched Electronic Program Guide (EEPG).
- Programs with a very long duration. In most cases, these are programs formed by the consecutive broadcast of a plurality of shorter programs (for example, several chapters of the same series that are broadcast consecutively). In this case, the Web Service is also used to check whether the program in question can be divided into sub-programs and, if so, the program is subdivided in the Enriched Electronic Program Guide (EEPG).
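As an illustration, the gap-filling combination described above could be sketched as follows; the entry fields (start, end, genre) and the matching rules are assumptions made for the sake of the example, not the patented procedure itself.

```python
def build_eepg(eit_programs, epg_programs):
    """Merge EIT programming with Web-Service EPG data into an
    enriched guide. Entries are dicts with 'start' and 'end' (hours
    as floats) and an optional 'genre'; field names are illustrative."""
    eepg = [dict(p) for p in eit_programs]

    # Fill genre gaps using the EPG entry for the same time slot.
    for prog in eepg:
        if not prog.get("genre"):
            for cand in epg_programs:
                if cand["start"] == prog["start"] and cand.get("genre"):
                    prog["genre"] = cand["genre"]
                    break

    # Fill time sections absent from the EIT with EPG entries,
    # so the enriched guide covers the full 24 hours.
    covered = [(p["start"], p["end"]) for p in eepg]
    for cand in epg_programs:
        if not any(s <= cand["start"] < e for s, e in covered):
            eepg.append(dict(cand))

    return sorted(eepg, key=lambda p: p["start"])
```

A real implementation would match entries by approximate times and titles rather than exact start values; the sketch only shows the precedence rule (EIT first, EPG to fill gaps).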
5) Separate the audio streams corresponding to each program of each channel transported by said Transport Streams (TS) according to the programming of each channel obtained from the Enriched Electronic Program Guide (EEPG).
The audio streams corresponding to each specific program of each channel of the Transport Stream (TS) in question are separated according to the programming stored in the Enriched Electronic Program Guide (EEPG). Since this Enriched Electronic Program Guide (EEPG) contains complete and up-to-date information on programming, the subdivision of audio streams into sections corresponding to specific programs is done in a more precise manner, which in turn allows an analysis of the audio description time program by program.
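A minimal sketch of this separation step, assuming the channel audio has already been decoded into a per-second sample list and that the EEPG entries carry hypothetical title, start_s and end_s fields:

```python
def slice_by_schedule(audio_seconds, eepg):
    """Split a channel's recorded audio (represented here simply as a
    list of per-second values) into per-program segments according to
    the EEPG schedule. Field names are illustrative."""
    segments = {}
    for prog in eepg:
        start, end = int(prog["start_s"]), int(prog["end_s"])
        segments[prog["title"]] = audio_seconds[start:end]
    return segments
```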
6) Analyze the audio streams corresponding to each program of each channel to obtain the audio description time.
There are generally two ways to broadcast the audio description on Digital Terrestrial Television (DTT): "synchronization at the receiver" and "synchronization at the transmitter". In receiver synchronization, the audio description is included in a dedicated audio stream, separated from the main audio stream, that is only used for the audio description broadcast, when there is one, and that is silent the rest of the time. In transmitter synchronization, an audio stream is broadcast containing the main audio and the audio description comments mixed. For a detailed description of the current situation of audio description in Digital Terrestrial Television (DTT) channels in Spain, reference can be made to the article "Overview of audio description in TDT channels in Spain", by Francisco José González León et al., published by the Spanish Center for Subtitling and Audio Description (CESyA).
In view of this situation, in a preferred embodiment of the invention corresponding to the case in which the audio description is emitted in receiver synchronization mode through an audio stream containing only the audio description, the analysis step comprises analyzing said audio stream, which contains only the audio description, to determine when audio descriptions are broadcast. That is, this analysis is as simple as inferring that audio description exists when said dedicated audio stream is not silent and does not exist when the audio stream is silent.
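This silence-based inference can be sketched with a simple per-block energy test; the block length, silence threshold and raw-sample representation are illustrative assumptions, not values fixed by the procedure:

```python
def audiodescription_seconds(samples, sample_rate, block_s=1.0,
                             silence_threshold=1e-4):
    """Count seconds of audio description in a dedicated AD stream:
    any block whose mean energy exceeds the (assumed) silence
    threshold is counted as containing audio description."""
    block = int(block_s * sample_rate)
    described = 0.0
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        energy = sum(x * x for x in chunk) / max(len(chunk), 1)
        if energy > silence_threshold:
            described += len(chunk) / sample_rate
    return described
```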
Alternatively, in another preferred embodiment of the invention corresponding to the case where the audio description is emitted in sender synchronization mode through an audio stream containing the mixed main audio and audio description, the analysis step comprises:
- Subtract the main audio from the audio stream that contains the main audio and audio description mixed to obtain a resulting audio stream.
- Analyze the resulting audio stream to determine when audio descriptions are broadcast. Since the result of the subtraction described above should be a resulting audio stream containing only the audio description, this analysis step could in principle be the same as described for the previous case: there is audio description when the resulting audio stream is not silent and there is none when it is silent.
However, the subtraction is frequently not exact for different reasons (for example, there are no complete silences in the audio stream that contains the main audio mixed with the audio description, there is desynchronization between both signals, etc.). This may require a more complex analysis of the three signals (main audio stream, main audio + audio description stream, resulting audio stream) based on energy, power and correlation statistics that finally makes it possible to obtain the audio description time of each program.
The end result of this procedure is that the audio description times corresponding to each of the programs of the channels included in the Transport Stream (TS) in question are obtained. Although this information already constitutes an important advance over the current measurement systems, carried out by people who must view the 24 hours of programming of each channel, the conclusions can be further improved by taking the genre of each program into account.
In fact, not all genres require the same amount of audio description to be properly audio-described. For example, a news program in which a static shot of the presenter is broadcast almost continuously hardly requires audio description, while a movie or series where the scene changes continuously requires a greater amount of audio description. With this in mind, the process of the present invention further comprises the steps of:
7) Calculate an audio description rate according to the audiodescribed time of each program.
8) Determine if the program is audiodescribed or not based on the audio description rate and the genre of the program.
Thus, the final result of the procedure is a table that indicates, for each program, whether it is audio-described or not. Human intervention is needed only as an option to verify that the results are reliable and match reality. As mentioned earlier, this new procedure makes it possible to greatly reduce the workload and, in addition, to reduce the necessary storage volume.
The described procedure is intended to be implemented with the help of processing means such as a computer or a group of computers. In this context, the term "computer" refers in general to any type of processing means capable of executing the process of the invention, such as servers, dedicated processing modules, computers themselves, etc. Therefore, the invention also extends to computer programs adapted to make a computer carry out the process of the invention. The program can have the form of source code, object code, an intermediate form between source code and object code, for example a partially compiled form, or any other form suitable for use in the implementation of the processes according to the invention.
The aforementioned computer program may be arranged on or within a carrier, with the carrier being understood as any entity or device capable of supporting the program.
For example, the carrier could include a storage medium, for example, a ROM, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a floppy disk or a hard disk. The carrier could also be an integrated circuit in which the program is included, the integrated circuit being adapted to execute, or to be used in the execution of, the corresponding processes.
Alternatively, the carrier may be a transmissible carrier signal, for example, an electrical or optical signal that could be transported through electrical or optical cable, by radio or by any other means. When the program is incorporated into a signal that can be directly transported by a cable or other device or medium, the carrier may be constituted by said cable or other device or means.
A second aspect of the present invention is directed to a system for measuring the audio description time in the Transport Streams (TS) of Digital Terrestrial Television (DTT) capable of executing the method described above. This system fundamentally includes:
a) A repository configured to receive and store the different Transport Flows (TS) of the Digital Terrestrial Television (DTT).
b) A first module configured to receive said Transport Streams (TS) from the repository and to extract information on each channel from the Event Information Tables (EIT) included in said Transport Streams (TS), extract additional information on each channel from an Electronic Program Guide (EPG) obtained from a Web Service, and generate an Enriched Electronic Program Guide (EEPG) by combining the information extracted from the Event Information Tables (EIT) and the additional information extracted from the Electronic Program Guide (EPG).
c) A second module configured to receive said Enriched Electronic Program Guide (EEPG), separate the audio streams corresponding to each program from each channel transported by said Transport Streams (TS) based on the programming of each channel obtained from the Enriched Electronic Program Guide (EEPG), and analyze the audio streams corresponding to each program of each channel to obtain the audio description time.
According to a preferred embodiment of the invention, the second module is further configured to calculate an audio description rate according to the audio-described time of each program, and to determine if the program is audio-described based on the audio description rate and the genre of the program.
In yet another preferred embodiment, the system of the invention further comprises a third module configured to store information related to the presence or absence of audio description in each program.
In another preferred embodiment of the invention, the system further comprises a fourth module configured to receive, tune in and record the Transport Streams (TS).
BRIEF DESCRIPTION OF THE FIGURES
Fig. 1 shows an example of architecture of a system according to the present invention.
Fig. 2 shows a flow chart of an example procedure according to the present invention.
PREFERRED EMBODIMENT OF THE INVENTION
A particular example of implementation of the process of the invention is described below. Fig. 1 shows an example of architecture of a system according to the present invention. The following list briefly describes the elements represented:
A: DVB-T receiver
B: PID multiplex selector
C: Transport Stream (TS) recorder
D: Event Information Table (EIT) extractor
E: Web Electronic Program Guide (EPG) extractor
W: Web service
F: XML unifier of the Enriched Electronic Program Guide (EEPG)
R: Repository
H: Audio description service analyzer
I: Audio description XML
J: SQL generator
K: Database loader
BD: Database
N: Web service
O: Reports
P: Alert generation
Module 4 (M4) receives the broadcast DTT signal, tunes it, and records the different television multiplexes in the repository (R) for further processing and analysis. Module 1 (M1) receives from the repository (R) the Event Information Tables (EIT) of the multiplexes to be analyzed and generates an Enriched Electronic Program Guide (EEPG) with the help of additional data obtained from a Web Service (W). The repository (R) receives and stores the Enriched Electronic Program Guide (EEPG) obtained by module 1 (M1). Module 2 (M2) receives from the repository (R) the Enriched Electronic Program Guide (EEPG) to determine the audio description data of the different channels contained in the multiplexes analyzed. At the output of module 2 (M2), an XML file is generated with the audio description data obtained and sent to the repository (R). Module 3 (M3) receives from the repository (R) the audio description data obtained and stores it in a database (BD), where the information contained in the XML is extracted and the accessibility monitoring of all multiplexes is stored. From this information, daily and monthly reports are generated, as well as alerts of non-compliance with the minimum audio description percentage established by the General Law of Audiovisual Communication (LGCA).
Fig. 2 shows in more detail an example of a flow chart of the process according to the present invention. Note that the blocks represented in Fig. 2 do not have to coincide exactly with the blocks represented in Fig. 1. First, the system of the invention receives at its input the DVB-T signal (1) coming over the air. Next, the transport streams (TS) to be analyzed are selected and recorded (2), obtaining files (3) in Transport Stream (TS) format containing the Program Specific Information (PSI) tables and the streams of the channels selected in the configuration. These files are temporarily stored (4) in the aforementioned repository (R) and, at the same time, in a parallel process they are demultiplexed (5) to obtain, on the one hand, the audio streams and the Program Specific Information (PSI) tables (6) and, on the other hand, the Event Information Table (EIT) (7) that contains the daily programming broadcast by the channels present in the Transport Stream (TS). A connection is then made to a public Web Service (W) (8) that contains the daily programming of the channels, and said programming (9) is extracted. Once both schedules have been obtained, they are combined (10) to obtain the enriched programming (11). From the enriched programming (11) and the audio streams and Program Specific Information (PSI) tables (6), the analysis of said information (12) is carried out and, by the procedure described later in this document, information about the audio-described programming (14) is extracted. This information (14) is stored (15) in the database (BD) mentioned above. In addition, the subsystem (12) also extracts statistical information from the Program Specific Information (PSI) tables and the audio streams, in addition to the EPG enriched with the audio description data (13) for each of the programs of each channel, which is stored (4) in the repository (R). The data stored (15) in the database (BD) feed a web application (17), and are also analyzed (16) so that statistical reports (18), non-compliance alarms (19) and alarm signals (20) are generated. When one of these non-compliance alarms is detected, the Transport Stream (TS) data stored in the temporary data repository (R) is transferred through a network flow (21) to a permanent storage device (22) that may be used as legal evidence in the event of a claim.
Next, the analysis procedure regarding the presence or absence of audio description is described in greater detail. This analysis is carried out program by program in module 2 (M2) from the Enriched Electronic Program Guide (EEPG) generated in module 1 (M1).
As mentioned previously, the analysis has different characteristics depending on the type of audio description (emission or reception mixing). In any case, the program in question is first subdivided into strips with variable statistical weights depending on the duration of the program. These strips serve to absorb the difference between the scheduled start and end times of the broadcast and the actual broadcast.
In the case of mixing at reception the procedure is relatively simple: audio is only output on the audio channel when audio description comments are played. To detect them, audio blocks of a maximum size of 35 seconds, for example 10 seconds, are analyzed. If a minimum energy threshold is exceeded in the analyzed block, this means that there is audio. In that case, the audio block is analyzed in smaller pieces of less than a second, for example 200 ms, to detect more precisely the exact moment of the presence of the audio description. Once the moment at which an audio description fragment begins is found, it is saved for later insertion into the program. To ensure that the entire fragment is saved, a time interval is added before the start and after the end of the audio description. The saved audio fragments are labeled with the audio start time to allow their correct insertion at the exact time of the program's broadcast.
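The two-level analysis just described (coarse blocks refined into sub-second pieces, with a safety margin around each fragment) can be sketched as follows; the block lengths, threshold and padding are illustrative values:

```python
def locate_ad_fragments(samples, rate, coarse_s=10.0, fine_s=0.2,
                        threshold=1e-4, pad_s=0.5):
    """Return (start, end) times in seconds of detected audio
    description, padded on both sides so whole fragments are kept.
    Threshold and padding are assumed values."""
    def energy(chunk):
        return sum(x * x for x in chunk) / max(len(chunk), 1)

    fine = int(fine_s * rate)
    coarse = int(coarse_s * rate)
    fragments = []
    for i in range(0, len(samples), coarse):
        block = samples[i:i + coarse]
        if energy(block) <= threshold:  # whole coarse block silent
            continue
        # Refine: scan the active block in sub-second pieces.
        active = [j for j in range(0, len(block), fine)
                  if energy(block[j:j + fine]) > threshold]
        if active:
            start = (i + active[0]) / rate - pad_s
            end = (i + active[-1] + fine) / rate + pad_s
            fragments.append((max(start, 0.0), end))
    return fragments
```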
The case of mixing at emission is more complex. First, the correlation between the main and secondary audio signals is computed to determine the delay of one with respect to the other and adjust the start and end of the frame. Subsequently, the delay is also detected in strips of several seconds, for example 10 seconds, to adjust the signal times. For each 10-second strip, vectors are obtained with the samples of the main audio signal, the secondary audio signal, and an additional audio signal resulting from the subtraction of the two previous signals. Next, the energy of each of the strips of both signals, main audio and secondary audio, is obtained, and it is determined by statistical calculations whether the secondary audio signal has the same language as the main audio. If it is confirmed that both signals have the same language, this means that the secondary audio signal may contain audio description.
To determine if this is the case, the signal containing the subtraction of the main audio and the secondary audio is analyzed in strips of less than one second, in this example 200-millisecond strips. For each strip, the statistics of its energy are analyzed and it is determined whether it is audio description or not. Then, as in the case of mixing at reception, an audio fragment is generated by taking margins before and after the start and end of the audio description and labeled with the start time of the audio to know the exact broadcast time.
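The alignment and subtraction for the emission-mix case can be sketched with a brute-force cross-correlation; this is an illustrative sketch under simplified assumptions (sample-aligned lists, non-negative lag), not the statistical language comparison of the actual procedure:

```python
def estimate_delay(main, mixed, max_lag):
    """Estimate the lag (in samples) of the main audio inside the
    mixed main+AD stream by maximizing cross-correlation."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = min(len(main), len(mixed) - lag)
        corr = sum(main[i] * mixed[i + lag] for i in range(n))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def residual_after_subtraction(main, mixed, max_lag):
    """Align the signals and subtract: the residual should carry only
    the audio description comments (plus subtraction noise)."""
    lag = estimate_delay(main, mixed, max_lag)
    n = min(len(main), len(mixed) - lag)
    return [mixed[i + lag] - main[i] for i in range(n)]
```

The residual can then be fed to the same sub-second energy analysis used in the reception-mix case.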
Finally, based on the number of audios extracted in each of the bands in which a program is subdivided, and the genre of the program, it is determined whether or not the program is audio-described. Specifically, a minimum audio description threshold is established based on the genre of the program and it is checked whether an audio description rate obtained from the number of audios extracted in each of the bands exceeds said minimum threshold. If so, it is considered that the program in question is audiodescribed. Otherwise, it is considered that the program in question is not audiodescribed.
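As an illustration of this final decision, with hypothetical per-genre thresholds (the description does not specify the actual values):

```python
# Hypothetical minimum audio-description rates per genre; the actual
# thresholds would be calibrated and are not given in the description.
GENRE_THRESHOLDS = {"news": 0.05, "film": 0.20, "series": 0.20}

def is_audiodescribed(ad_seconds, program_seconds, genre,
                      default_threshold=0.10):
    """A program counts as audio-described when its AD rate
    (described time / total duration) reaches the genre threshold."""
    rate = ad_seconds / program_seconds
    return rate >= GENRE_THRESHOLDS.get(genre, default_threshold)
```

With these assumed values, 6 minutes of audio description in a one-hour program would suffice for a news program but not for a film.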
Claims:
Claims (12)
[1]
1. Procedure for measuring the audio description time in the Transport Streams (TS) of the Digital Terrestrial Television (DTT), characterized in that it comprises the following steps:
- obtain the different Transport Streams (TS) of Digital Terrestrial Television (DTT);
- extract information about each channel from the Event Information Tables (EIT) included in said Transport Streams (TS);
- extract additional information on each channel from an Electronic Program Guide (EPG) obtained from a Web Service;
- generate an Enriched Electronic Program Guide (EEPG) by combining the information extracted from Event Information Tables (EIT) and additional information extracted from the Electronic Program Guide (EPG);
- separate the audio streams corresponding to each program of each channel transported by said Transport Streams (TS) based on the programming of each channel obtained from the Enriched Electronic Program Guide (EEPG); and
- analyze the audio streams corresponding to each program of each channel to obtain the audio description time.
[2]
2. Procedure according to claim 1, wherein the step of generating the Enriched Electronic Program Guide (EEPG) comprises starting from the information obtained from the Event Information Table (EIT) and, if there are gaps, filling them in with information obtained from the Electronic Program Guide (EPG).
[3]
3. Method according to any of the preceding claims wherein, in the event that the audio description is emitted in receiver synchronization mode through an audio stream containing only the audio description, the analysis step comprises analyzing said audio stream containing only the audio description to determine when audio descriptions are broadcast.
[4]
4. Method according to any of the preceding claims wherein, in the event that the audio description is emitted in transmitter synchronization mode through an audio stream containing the main audio and the audio description mixed, the analysis step comprises:
- subtract the main audio from the audio stream containing said main audio and the audio description mixed to obtain a resulting audio stream; and
- analyze the resulting audio stream to determine when audio descriptions are broadcast.
[5]
5. Method according to any of the preceding claims, which further comprises the steps of:
- calculating an audio description rate according to the audio-described time of each program; and
- determining if the program is audio-described based on the audio description rate and the genre of the program.
[6]
6. Computer program comprising program instructions for making a computer carry out the method according to any one of claims 1 to 5.
[7]
7. Computer program according to claim 6, which is incorporated into a storage medium.
[8]
8. Computer program according to claim 6, which is supported on a carrier signal.
[9]
9. System for measuring the audio description time in the Transport Streams (TS) of Digital Terrestrial Television (DTT) capable of executing the procedure of any of claims 1-5, characterized in that it comprises:
- a repository (R) configured to receive and store the different Transport Streams (TS) of Digital Terrestrial Television (DTT);
- a first module (M1) configured to receive said Transport Flows (TS) from the repository (R) and to extract information on each channel of the Event Information Tables (EIT) included in said Transport Flows (TS), extract information additional information on each channel of an Electronic Program Guide (EPG) obtained from a Web Service, and generate an Enriched Electronic Program Guide (EEPG) by combining the information extracted from Event Information Tables (EIT) and additional information extracted from the Electronic Program Guide (EPG); Y
- a second module (M2) configured to receive said Enriched Electronic Program Guide (EEPG), separate the audio streams corresponding to each program of each channel transported by said Transport Streams (TS) according to the programming of each channel obtained from the Enriched Electronic Program Guide (EEPG), and analyze the audio streams corresponding to each program of each channel to obtain the audio description time.
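The dataflow of the claimed system can be illustrated with a minimal sketch of module M1's guide-merging step. The data shapes and field names here are assumptions for illustration only; the claim specifies the modules and their inputs, not a concrete data model.

```python
# Illustrative sketch of module M1: merging EIT data from the TS with
# EPG data from a Web Service into an enriched guide (EEPG).
# The EEPGEntry fields and merge policy are assumptions, not from the patent.
from dataclasses import dataclass

@dataclass
class EEPGEntry:
    channel: str
    program: str
    start: float              # program start, seconds from epoch
    end: float                # program end, seconds from epoch
    genre: str = "unknown"    # typically richer in the Web-Service EPG

def m1_build_eepg(eit_entries, epg_entries):
    """Combine EIT entries with Web-Service EPG entries: enrich programs
    present in both sources with the EPG genre, and keep EPG-only programs."""
    by_key = {(e.channel, e.program): e for e in eit_entries}
    for e in epg_entries:
        key = (e.channel, e.program)
        if key in by_key and e.genre != "unknown":
            by_key[key].genre = e.genre   # enrich the EIT entry
        else:
            by_key.setdefault(key, e)     # add EPG-only programs
    return list(by_key.values())
```

Module M2 would then slice each channel's audio streams at the EEPG program boundaries before running the per-program audio analysis.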
[10]
10. System according to claim 9, wherein the second module (M2) is further configured to calculate an audio description rate according to the audio-described time of each program, and to determine whether the program is audio-described based on the audio description rate and the genre of the program.
[11]
11. System according to claim 10, further comprising a third module (M3) configured to store information regarding the presence or absence of audio description in each program.
[12]
12. System according to any of claims 10-11, further comprising a fourth module (M4) configured to receive, tune and record the Transport Streams (TS).
Similar technologies:
Publication number | Publication date | Patent title
US9860611B2|2018-01-02|Broadcast service transmitting method, broadcasting service receiving method and broadcast service receiving apparatus
EP3490174A1|2019-05-29|Transmitting method, receiving method, transmitting device, and receiving device
CN103283220A|2013-09-04|Method for transmitting broadcast service, method for receiving the broadcasting service, and apparatus for receiving the broadcasting service
US8171308B2|2012-05-01|Digital broadcasting system and method of processing data in digital broadcasting system
EP3270601B1|2020-10-21|Self-adaptive streaming medium processing method and apparatus
KR100837720B1|2008-06-13|Method and Apparatus for synchronizing data service with video service in Digital Multimedia Broadcasting and Executing Method of Data Service
CA2795191C|2016-11-29|Method and apparatus for processing non-real-time broadcast service and content transmitted by broadcast signal
US10715845B2|2020-07-14|Broadcast signal transmission/reception device and method
CN102301701B|2014-11-26|Transmitting/receiving system and method of processing data in the transmitting/receiving system
KR101577815B1|2015-12-15|Stream Generator for Digital Broadcasting and Method Thereof
ES2668498B1|2019-03-01|Procedure and system to measure audio description time in radio and television transport flows
CN102833588A|2012-12-19|Transmission apparatus, reception apparatus, broadcast system, transmission method, reception method, and program therefor
KR20100025689A|2010-03-10|Broadcast receiver and method for offering epg of acap channel
KR20160065510A|2016-06-09|Method for producing video file or streaming packet including non-audible sound wave and television broadcasting system using the method
US10305722B2|2019-05-28|Apparatus and method for transmitting or receiving broadcast signal |
BR112012008175B1|2021-02-09|systems and methods for transmitting media content by means of digital broadcast transmission for synchronized rendering by a receiver
US20130227606A1|2013-08-29|Server apparatus and terminal apparatus
ES2482840B1|2015-04-20|Procedure and device for measuring subtitling time in radio and television transport flows
CN108028708A|2018-05-11|Data processing equipment and data processing method
JP2011223353A|2011-11-04|Digital-broadcast-data transmitting method
KR102128438B1|2020-06-30|A storage and transmission apparatus for each analysis data including synchronization time information of a multi-channel transport stream
ES2705035T3|2019-03-21|Method and device to convert an RDI data stream into an ETI data stream
WO2020125704A1|2020-06-25|Broadcast signal receiving apparatus and broadcast signal receiving method, and viewer attribute determination system and viewer attribute determination method
KR101559877B1|2015-10-13|Method for producing video file or streaming packet including non-audible sound wave and television broadcasting system using the method
US10321205B2|2019-06-11|Method for broadcasting an alert service
Patent family:
Publication number | Publication date
ES2668498B1|2019-03-01|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
ES2482840A1|2012-10-17|2014-08-04|Universidad Carlos Iii De Madrid|Method and device for measuring subtitling times in radio and television transport streams|
Legal status:
2019-03-01| FG2A| Definitive protection|Ref document number: 2668498 Country of ref document: ES Kind code of ref document: B1 Effective date: 20190301 |
Priority:
Application number | Filing date | Patent title
ES201631471A|ES2668498B1|2016-11-17|2016-11-17|Procedure and system to measure audio description time in radio and television transport flows|ES201631471A| ES2668498B1|2016-11-17|2016-11-17|Procedure and system to measure audio description time in radio and television transport flows|